
    Constraining the Parameters of High-Dimensional Models with Active Learning

    Constraining the parameters of physical models with $>5$-$10$ parameters is a widespread problem in fields like particle physics and astronomy. The generation of data to explore this parameter space often requires large amounts of computational resources. The commonly used solution of reducing the number of relevant physical parameters hampers the generality of the results. In this paper we show that this problem can be alleviated by the use of active learning. We illustrate this with examples from high energy physics, a field where simulations are often expensive and parameter spaces are high-dimensional. We show that the active learning techniques query-by-committee and query-by-dropout-committee allow for the identification of model points in interesting regions of high-dimensional parameter spaces (e.g. around decision boundaries). This makes it possible to constrain model parameters more efficiently than is currently done with the most common sampling algorithms, and to train better-performing machine learning models on the same amount of data. Code implementing the experiments in this paper can be found on GitHub.
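
    Below is a minimal Python sketch of the query-by-committee idea described in the abstract (illustrative only, not the authors' code): an ensemble stands in for the committee, a cheap toy function stands in for the expensive simulation, and the points on which the committee disagrees most are the ones queried next. The function run_simulation, the dimensionality and the query budget are all hypothetical.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        dim = 10                                    # high-dimensional parameter space

        def run_simulation(x):                      # placeholder for the costly simulator
            return float(np.sum(x**2) < dim / 4.0)  # toy "decision boundary"

        # Seed set: a few points labelled by the expensive simulation
        X = rng.uniform(-1, 1, size=(50, dim))
        y = np.array([run_simulation(x) for x in X])

        for iteration in range(5):
            # Committee approximated by the trees of a random forest
            committee = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
            pool = rng.uniform(-1, 1, size=(5000, dim))      # cheap candidate points
            votes = np.stack([t.predict(pool) for t in committee.estimators_])
            p = votes.mean(axis=0)
            disagreement = p * (1 - p)                       # largest near the boundary
            query = pool[np.argsort(disagreement)[-20:]]     # most contested points
            X = np.vstack([X, query])
            y = np.concatenate([y, [run_simulation(x) for x in query]])

    After a few iterations the labelled set concentrates around the decision boundary, which is where additional simulations are most informative.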

    DeepXS: Fast approximation of MSSM electroweak cross sections at NLO

    We present a deep learning solution to the prediction of particle production cross sections over a complicated, high-dimensional parameter space. We demonstrate the applicability by providing state-of-the-art predictions for the production of charginos and neutralinos at the Large Hadron Collider (LHC) at next-to-leading order in the phenomenological MSSM-19, and explicitly demonstrate the performance for $pp\to\tilde{\chi}^+_1\tilde{\chi}^-_1$, $\tilde{\chi}^0_2\tilde{\chi}^0_2$ and $\tilde{\chi}^0_2\tilde{\chi}^\pm_1$ as a proof of concept that will be extended to all SUSY electroweak pairs. We obtain errors that are lower than the uncertainty from scale and parton distribution functions, with mean absolute percentage errors well below $0.5\,\%$, allowing safe inference at next-to-leading order with inference times that improve on the Monte Carlo integration procedures available so far by a factor of $\mathcal{O}(10^7)$, from $\mathcal{O}(\mathrm{min})$ to $\mathcal{O}(\mu\mathrm{s})$ per evaluation. Comment: 7 pages, 3 figures.
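
    As a rough illustration of the regression task described above (not the DeepXS implementation), the sketch below fits a small neural network to map model parameters to a cross section in log space; the synthetic data, network size and training schedule are hypothetical stand-ins.

        import torch
        import torch.nn as nn

        theta = torch.randn(10000, 19)                # stand-in for pMSSM-19 parameters
        sigma = torch.exp(-theta.pow(2).sum(1) / 19)  # stand-in for NLO cross sections

        model = nn.Sequential(
            nn.Linear(19, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )
        optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
        target = torch.log(sigma).unsqueeze(1)        # log target tames the dynamic range

        for epoch in range(200):
            optimiser.zero_grad()
            loss = nn.functional.mse_loss(model(theta), target)
            loss.backward()
            optimiser.step()

        # A trained surrogate replaces a per-point Monte Carlo integration (minutes)
        # with a single forward pass (microseconds).
        predicted_sigma = torch.exp(model(theta[:5])).squeeze(1)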

    Identifying WIMP dark matter from particle and astroparticle data

    One of the most promising strategies to identify the nature of dark matter consists in the search for new particles at accelerators and with so-called direct detection experiments. Working within the framework of simplified models, and making use of machine learning tools to speed up statistical inference, we address the question of what we can learn about dark matter from a detection at the LHC and a forthcoming direct detection experiment. We show that with a combination of accelerator and direct detection data, it is possible to identify newly discovered particles as dark matter, by reconstructing their relic density assuming they are weakly interacting massive particles (WIMPs) thermally produced in the early Universe, and demonstrating that it is consistent with the measured dark matter abundance. An inconsistency between these two quantities would instead point either towards additional physics in the dark sector, or towards a non-standard cosmology, with a thermal history substantially different from that of the standard cosmological model. Comment: 24 pages (+21 pages of appendices and references) and 14 figures. v2: Updated to match JCAP version; includes minor clarifications in the text and updated references.
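
    The consistency test sketched in the abstract can be pictured with the hedged Python sketch below (not the paper's pipeline): given posterior samples of simplified-model parameters from a combined LHC plus direct-detection fit, compute the thermal relic density for each sample with a fast surrogate and compare it with the measured abundance. The surrogate relic_density, the parameter names and the posterior samples are all hypothetical.

        import numpy as np

        OMEGA_H2_MEASURED = 0.12                     # measured dark matter abundance (approx.)

        def relic_density(mass, coupling):           # placeholder for a fast trained surrogate
            return 0.12 * (mass / 100.0) ** 2 / coupling ** 2

        rng = np.random.default_rng(0)
        posterior_samples = np.column_stack([
            rng.normal(100.0, 5.0, 10000),           # dark matter mass [GeV]
            rng.normal(1.0, 0.05, 10000),            # coupling
        ])

        omega = np.array([relic_density(m, g) for m, g in posterior_samples])
        # Fraction of the posterior whose reconstructed relic density matches the
        # measured abundance to within 10 %; a low fraction would hint at extra
        # dark-sector physics or a non-standard thermal history.
        consistent = np.mean(np.abs(omega / OMEGA_H2_MEASURED - 1.0) < 0.1)
        print(f"posterior fraction consistent with the measured abundance: {consistent:.2f}")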

    Event Generation and Statistical Sampling for Physics with Deep Generative Models and a Density Information Buffer

    We present a study for the generation of events from a physical process with deep generative models. The simulation of physical processes requires not only producing physical events, but also ensuring that these events occur with the correct frequencies. We investigate the feasibility of learning the event generation and the frequency of occurrence with Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) to produce events like Monte Carlo generators. We study three processes: a simple two-body decay, the process $e^+e^-\to Z \to l^+l^-$, and $pp \to t\bar{t}$ including the decay of the top quarks and a simulation of the detector response. We find that the tested GAN architectures and the standard VAE are not able to learn the distributions precisely. By buffering density information of Monte Carlo events encoded with the encoder of a VAE, we are able to construct a prior for sampling new events from the decoder that yields distributions in very good agreement with real Monte Carlo events, generated several orders of magnitude faster. Applications of this work include generic density estimation and sampling, targeted event generation via a principal component analysis of encoded ground-truth data, anomaly detection, and more efficient importance sampling, e.g. for the phase-space integration of matrix elements in quantum field theories. Comment: 24 pages, 10 figures.
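
    The following Python sketch illustrates the density-buffering idea in the abstract (it is not the authors' implementation): encode ground-truth Monte Carlo events with the trained VAE encoder, buffer a density estimate of the resulting latent points, and sample new latents from that buffered density instead of the standard normal prior before decoding. The encoder, decoder and event arrays below are toy stand-ins.

        import numpy as np
        from sklearn.neighbors import KernelDensity

        def encoder(events):                          # stand-in for a trained VAE encoder
            return events[:, :2]                      # toy 2-d latent representation

        def decoder(z):                               # stand-in for a trained VAE decoder
            return np.hstack([z, z.sum(axis=1, keepdims=True)])

        mc_events = np.random.default_rng(0).normal(size=(10000, 3))

        # 1. Encode the Monte Carlo events and buffer their latent-space density
        z = encoder(mc_events)
        buffered_prior = KernelDensity(bandwidth=0.1).fit(z)

        # 2. Sample latents from the buffered density rather than N(0, 1) and decode
        z_new = buffered_prior.sample(100000)
        generated_events = decoder(z_new)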

    A comparison of optimisation algorithms for high-dimensional particle and astrophysics applications

    Optimisation problems are ubiquitous in particle and astrophysics, and involve locating the optimum of a complicated function of many parameters that may be computationally expensive to evaluate. We describe a number of global optimisation algorithms that are not yet widely used in particle astrophysics, benchmark them against random sampling and existing techniques, and perform a detailed comparison of their performance on a range of test functions. These include four analytic test functions of varying dimensionality, and a realistic example derived from a recent global fit of weak-scale supersymmetry. Although the best algorithm to use depends on the function being investigated, we are able to present general conclusions about the relative merits of random sampling, Differential Evolution, Particle Swarm Optimisation, the Covariance Matrix Adaptation Evolution Strategy, Bayesian Optimisation, Grey Wolf Optimisation, and the PyGMO Artificial Bee Colony, Gaussian Particle Filter and Adaptive Memory Programming for Global Optimisation algorithms.
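
    A small Python sketch of the kind of benchmark described above (assumptions: the Rosenbrock function as the analytic test function, an evaluation budget of 20,000, and scipy's Differential Evolution as one of the competitors; the paper's actual test functions, budgets and implementations differ).

        import numpy as np
        from scipy.optimize import differential_evolution, rosen

        dim = 10
        bounds = [(-5.0, 5.0)] * dim
        budget = 20000                                  # total function evaluations

        # Baseline: uniform random sampling with the same evaluation budget
        rng = np.random.default_rng(1)
        samples = rng.uniform(-5.0, 5.0, size=(budget, dim))
        best_random = min(rosen(x) for x in samples)

        # Differential Evolution with a roughly comparable budget
        result = differential_evolution(rosen, bounds, popsize=15,
                                        maxiter=budget // (15 * dim),
                                        polish=False, seed=1)

        print(f"random sampling best: {best_random:.3g}")
        print(f"differential evolution best: {result.fun:.3g}")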

    3rd IML Machine Learning Workshop

    We present a study for the generation of events from a physical process with generative deep learning. To simulate physical processes, it is not only important to produce physical events, but also to produce them with the right frequency of occurrence (density). We investigate the feasibility of learning the event generation and the frequency of occurrence with Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) to produce events like Monte Carlo generators. We study three toy models from high energy physics: a simple two-body decay, the process $e^+e^-\to Z \to l^+l^-$, and $pp \to t\bar{t}$ including the decay of the top quarks and a simulation of the detector response. We show that GANs and the standard VAE do not produce the right distributions. By buffering density information of Monte Carlo events in latent space, given the encoder of a VAE, we are able to construct a prior for sampling new events from the decoder that yields distributions in very good agreement with real Monte Carlo events and are generated $\mathcal{O}(10^8)$ times faster. Applications of this work include generic density estimation and sampling, targeted event generation via a principal component analysis of encoded events in the latent space, and the possibility to generate better random numbers for importance sampling, e.g. for the phase-space integration of matrix elements in perturbative quantum field theories. The method also allows building event generators directly from real data events.
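
    The targeted event generation mentioned above can be pictured with the short sketch below (illustrative only; the encoder and decoder are toy stand-ins for a trained VAE): fit a PCA to the encoded events and walk along a principal component in latent space to steer which events the decoder produces.

        import numpy as np
        from sklearn.decomposition import PCA

        def encoder(events):                           # stand-in for a trained VAE encoder
            return events[:, :2]

        def decoder(z):                                # stand-in for a trained VAE decoder
            return np.hstack([z, z.sum(axis=1, keepdims=True)])

        mc_events = np.random.default_rng(0).normal(size=(5000, 3))
        z = encoder(mc_events)

        pca = PCA(n_components=2).fit(z)
        # Move along the first principal component to generate a targeted family of events
        steps = np.linspace(-2.0, 2.0, 100)[:, None] * pca.components_[0] + pca.mean_
        targeted_events = decoder(steps)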

    Direct or indirect electrification? A review of heat generation and road transport decarbonisation scenarios for Germany 2050

    Energy scenarios provide guidance to energy policy, not least by presenting decarbonisation pathways for climate change mitigation. We review such scenarios for the example of Germany 2050, with a focus on the decarbonisation of heat generation and road transport. In this context, we characterize the role of renewable electricity and contrast two rivalling narratives: direct and indirect electrification. On the one hand, electricity directly provides heat and transport, using electric heat pumps, electric heaters, and battery electric vehicles. On the other hand, electricity, heat, and transport are indirectly linked, using gas heat pumps, gas heaters, fuel cell electric vehicles, and internal combustion engine vehicles, in combination with power-to-gas and power-to-liquid processes. To reach climate policy targets, our findings imply that energy stakeholders must (1) plan for the significant additional demand for renewable electricity for heat and road transport, (2) pave the way for system-friendly direct heat electrification, (3) be aware of technological uncertainties in the transport sector, (4) clarify the vision for decarbonisation, particularly for road transport, and (5) use holistic and more comparable scenario frameworks.